

Review for NeurIPS paper: Post-training Iterative Hierarchical Data Augmentation for Deep Networks

Neural Information Processing Systems

In this paper, the authors propose a new method for training deep networks that relies on learning a generative model for data augmentations. The generative model is derived from a variational auto-encoder (VAE) and trained to learn augmentations for each hidden representation in the network. The authors test this new method for data augmentation and fine-tuning on CIFAR-10, CIFAR-100, and ImageNet, reporting substantial gains in classification accuracy on all of these benchmarks. The reviewers raised concerns about the theoretical grounding of the method, the strength of the baselines, the training time in practice, and the clarity of presentation of this complex method.
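The VAE-based augmentation the review describes can be illustrated with a minimal sketch of the reparameterization step used to synthesize new feature vectors around a hidden representation. All names here (encode, synthesize_augmentations) and the fixed log-variance are illustrative assumptions, not the paper's actual model, which would learn both mean and variance per hidden representation:

```python
import math
import random

def encode(h):
    # Toy "encoder": treat the feature vector itself as the mean and use a
    # fixed log-variance; a real VAE would predict both quantities from h.
    mu = list(h)
    log_var = [-2.0] * len(h)  # small variance -> conservative augmentations
    return mu, log_var

def synthesize_augmentations(h, n_samples=3, seed=0):
    # Draw n_samples augmented feature vectors around the representation h.
    rng = random.Random(seed)
    mu, log_var = encode(h)
    samples = []
    for _ in range(n_samples):
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1)
        z = [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
             for m, lv in zip(mu, log_var)]
        samples.append(z)
    return samples

augmented = synthesize_augmentations([0.5, -1.2, 3.0])
```

Sampling in the hidden feature space, rather than in pixel space, is what lets one generative model produce augmentations for every level of the network.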


Post-training Iterative Hierarchical Data Augmentation for Deep Networks

Neural Information Processing Systems

In this paper, we propose a new iterative hierarchical data augmentation (IHDA) method to fine-tune trained deep neural networks and improve their generalization performance. The IHDA is motivated by three key insights: (1) deep networks (DNs) are good at learning multi-level representations from data. Accordingly, the IHDA performs DA in a deep feature space at level l by transforming it into a distribution space and synthesizing new samples from the learned distributions for data points that lie in hard-to-classify regions, which are estimated by analyzing the neighborhood characteristics of each data point. The synthesized samples are used to fine-tune the parameters of the subsequent layers. The same procedure is then repeated for the feature space at level l + 1.
